
    Finally a Case for Collaborative VR?: The Need to Design for Remote Multi-Party Conversations

    Amid current social distancing measures requiring people to work from home, there has been renewed interest in how to effectively converse and collaborate remotely using currently available technologies. On the surface, VR provides a perfect platform for effective remote communication. It can transfer contextual and environmental cues and facilitate a shared perspective while also allowing people to be virtually co-located. Yet we argue that VR is not currently designed adequately for such a communicative purpose. In this paper, we outline three key barriers to using VR for conversational activity: (1) variability of social immersion, (2) unclear user roles, and (3) the need for effective shared visual reference. Based on this outline, key design topics are discussed from a user experience design perspective as considerations for a future collaborative design framework.

    March Madness: NCAA Tournament Participation and College Alcohol Use

    While athletic success may improve the visibility of a university to prospective students and thereby benefit the school, it may also increase risky behavior in the current student body. Using the Harvard School of Public Health College Alcohol Study, we find that a school's participation in the NCAA Basketball Tournament is associated with a 47% increase in binge drinking by male students at that school. Additionally, we find evidence that drunk driving increases by 5% among all students during the tournament. (JEL I12, I23, Z28)

    Mapping Theoretical and Methodological Perspectives for Understanding Speech Interface Interactions

    CHI 2019: The ACM CHI Conference on Human Factors in Computing Systems - Weaving the Threads of CHI, Glasgow, United Kingdom, 4-9 May 2019. The use of speech as an interaction modality has grown considerably through the integration of Intelligent Personal Assistants (IPAs, e.g. Siri, Google Assistant) into smartphones and voice-based devices (e.g. Amazon Echo). However, there remain significant gaps in using theoretical frameworks to understand user behaviours and choices and how they may be applied to specific speech interface interactions. This part-day multidisciplinary workshop aims to critically map out and evaluate theoretical frameworks and methodological approaches across a number of disciplines and establish directions for new paradigms in understanding speech interface user behaviour. In doing so, we will bring together participants from HCI and other speech-related domains to establish a cohesive, diverse and collaborative community of researchers from academia and industry with an interest in exploring theoretical and methodological issues in the field. (Irish Research Council)

    CUI@CSCW: Collaborating through Conversational User Interfaces

    This virtual workshop seeks to bring together the burgeoning communities centred on the design, development, application and study of so-called Conversational User Interfaces (CUIs). CUIs are used in myriad contexts, from online support chatbots through to entertainment devices in the home. In this workshop, we will examine the challenges involved in transforming CUIs into everyday computing devices capable of supporting collaborative activities across space and time. Additionally, this workshop seeks to establish a cohesive CUI community and research agenda within CSCW. We will examine the ways in which CSCW research can contribute insights into understanding how CUIs are or can be used in a variety of settings, from public to private, and how they can be brought into a potentially unlimited number of tasks. This proposed workshop will bring together researchers from academia and practitioners from industry to survey the state of the art in CUI design, use, and understanding, and will map out new areas for work, including addressing the technical, social, and ethical challenges that lie ahead. By bringing together existing researchers and new ideas in this space, we intend to foster a strong community and enable potential future collaborations.

    An Empirical Study of Topic Transition in Dialogue

    Transitioning between topics is a natural component of human-human dialogue. Although topic transition has been studied in dialogue for decades, only a handful of corpus-based studies have investigated the subtleties of topic transitions. This study therefore annotates 215 conversations from the Switchboard corpus and investigates how variables such as conversation length, number of topic transitions, topic transitions shared by participants, and turns per topic are related. This work presents an empirical study of topic transition in the Switchboard corpus, followed by modelling of topic transition with a precision of 83% on the in-domain (ID) test set and 82% on 10 out-of-domain (OOD) test sets. It is envisioned that this work will help open-domain dialogue systems emulate the topic transition behaviour of human-human dialogue.

    What Do We See in Them? Identifying Dimensions of Partner Models for Speech Interfaces Using a Psycholexical Approach

    Perceptions of system competence and communicative ability, termed partner models, play a significant role in speech interface interaction. Yet we do not know what the core dimensions of this concept are. Taking a psycholexical approach, our paper is the first to identify the key dimensions that define partner models in speech agent interaction. Through a repertory grid study (N=21), a review of key subjective questionnaires, an expert review of the resulting word pairs, and an online study of 356 users of speech interfaces, we identify three key dimensions that make up a user's partner model: 1) perceptions of partner competence and dependability; 2) assessment of human-likeness; and 3) a system's perceived cognitive flexibility. We discuss the implications for partner modelling as a concept, emphasising the importance of salience and the dynamic nature of these perceptions.

    Mental Workload and Language Production in Non-Native Speaker IPA Interaction

    Through proliferation on smartphones and smart speakers, intelligent personal assistants (IPAs) have made speech a common interaction modality. Yet, due to linguistic coverage and varying levels of functionality, many speakers engage with IPAs using a non-native language. This may impact the mental workload and pattern of language production displayed by non-native speakers. We present a mixed-design experiment, wherein native (L1) and non-native (L2) English speakers completed tasks with IPAs through smartphones and smart speakers. We found significantly higher mental workload for L2 speakers during IPA interactions. Contrary to our hypotheses, we found no significant differences between L1 and L2 speakers in terms of number of turns, lexical complexity, diversity, or lexical adaptation when encountering errors. These findings are discussed in relation to language production and processing load increases for L2 speakers in IPA interaction.

    See What I’m Saying? Comparing Intelligent Personal Assistant Use for Native and Non-Native Language Speakers

    Limited linguistic coverage for Intelligent Personal Assistants (IPAs) means that many users interact in a non-native language. Yet we know little about how IPAs currently support or hinder these users. Through a study of native (L1) and non-native (L2) English speakers interacting with Google Assistant on a smartphone and a smart speaker, we aim to understand this more deeply. Interviews revealed that L2 speakers prioritised utterance planning around perceived linguistic limitations, whereas L1 speakers prioritised succinctness because of system limitations. L2 speakers see IPAs as insensitive to their linguistic needs, resulting in failed interactions. L2 speakers clearly preferred using smartphones, as visual feedback supported diagnosis of communication breakdowns whilst allowing time to process query results. Conversely, L1 speakers preferred smart speakers, with audio feedback seen as sufficient. We discuss the need to tailor the IPA experience for L2 users, emphasising visual feedback whilst reducing the burden of language production.
